Results 1 - 20 of 44,481
1.
Methods Mol Biol ; 2787: 3-38, 2024.
Article in English | MEDLINE | ID: mdl-38656479

ABSTRACT

In this chapter, we explore the application of high-throughput crop phenotyping facilities for phenotype data acquisition and the extraction of meaningful information from the collected data through image processing and data mining methods. Additionally, the construction and outlook of crop phenotype databases are introduced, and the need for global cooperation and data sharing is emphasized. High-throughput crop phenotyping substantially improves accuracy and efficiency compared with traditional measurements, helping to overcome bottlenecks in the phenotyping field and to advance crop genetics.


Subjects
Crops, Agricultural; Data Mining; Image Processing, Computer-Assisted; Phenotype; Crops, Agricultural/genetics; Crops, Agricultural/growth & development; Data Mining/methods; Image Processing, Computer-Assisted/methods; Data Management/methods; High-Throughput Screening Assays/methods
2.
Methods Mol Biol ; 2787: 315-332, 2024.
Article in English | MEDLINE | ID: mdl-38656500

ABSTRACT

Structural insights into macromolecular and protein complexes provide key clues about the molecular basis of function. Cryogenic electron microscopy (cryo-EM) has emerged as a powerful structural biology method for studying protein and macromolecular structures at high resolution in both native and near-native states. Despite the ability to obtain detailed structural insights into the processes underlying protein function using cryo-EM, plant biologists have been hesitant to apply the method to biomolecular interaction studies. This is largely evident from the relatively few structural depositions of proteins and protein complexes of plant origin in the Electron Microscopy Data Bank. Although progress has been slow, cryo-EM has significantly contributed to our understanding of the molecular processes underlying photosynthesis and energy transfer in plants, as well as of the viruses that infect plants. This chapter introduces, for beginners, sample preparation for both negative-staining electron microscopy (NSEM) and cryo-EM of plant proteins and macromolecular complexes, and data analysis using single-particle analysis.


Subjects
Cryoelectron Microscopy; Macromolecular Substances; Cryoelectron Microscopy/methods; Macromolecular Substances/ultrastructure; Macromolecular Substances/chemistry; Macromolecular Substances/metabolism; Plant Proteins/metabolism; Plant Proteins/ultrastructure; Plant Proteins/chemistry; Negative Staining/methods
3.
Methods Mol Biol ; 2788: 81-95, 2024.
Article in English | MEDLINE | ID: mdl-38656510

ABSTRACT

Atomic force microscopy (AFM) has broken new ground in the characterization of the supramolecular architecture of cell wall assemblies and of single cell wall polysaccharides at the nanoscale. Moreover, AFM makes it possible to evaluate the mechanical properties of cell wall material, which is not possible with any other method. For plant tissue, however, the critical step is careful sample preparation that does not affect polysaccharide structure or assembly while also respecting device limitations, especially scanner ranges. This chapter presents protocols spanning sample preparation, including isolation of cell wall material and extraction of cell wall polysaccharide fractions, AFM imaging of polysaccharide assemblies and single molecules, and image analysis that yields quantitative data characterizing these biopolymers.


Subjects
Cell Wall; Microscopy, Atomic Force; Microscopy, Atomic Force/methods; Cell Wall/ultrastructure; Cell Wall/chemistry; Polysaccharides/chemistry; Polysaccharides/analysis
4.
Lasers Med Sci ; 39(1): 112, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656634

ABSTRACT

PURPOSE: To measure the dynamic characteristics of the flow field in a complex root canal model activated by two laser-activated irrigation (LAI) modalities at different activation energy outputs: photon-induced photoacoustic streaming (PIPS) and microshort pulse (MSP). METHODS: A phase-locked micro-scale Particle Image Velocimetry (µPIV) system was employed to characterise the temporal variations of LAI-induced velocity fields in the root canal following a single laser pulse. The wall shear stress (WSS) in the lateral root canal was subsequently estimated from the phase-averaged velocity fields. RESULTS: Both PIPS and MSP were able to generate the 'breath mode' of the irrigant current under all tested conditions. The transient irrigation flush in the root canal peaked at speeds close to 6 m/s. However, this intense flushing effect persisted for only about 2000 µs (or 3% of a single laser-pulse activation cycle). For MSP, the maximum WSS magnitude was approximately 3.08 Pa at an activation energy of E = 20 mJ/pulse, rising to 9.01 Pa at E = 50 mJ/pulse. In comparison, PIPS elevated the WSS to 10.63 Pa at E = 20 mJ/pulse. CONCLUSION: Elevating the activation energy can boost the peak flushing velocity and the maximum WSS, thereby enhancing irrigation efficiency. Given the same activation energy, PIPS outperforms MSP. Additionally, increasing the activation frequency may be an effective strategy to improve irrigation performance further.
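As a point of reference for the wall shear stress (WSS) figures quoted above, the sketch below shows the basic estimate tau_w = mu * (du/dy) at the wall from a phase-averaged near-wall velocity profile; the viscosity, grid spacing, and velocity values are illustrative assumptions, not data from the study.

```python
import numpy as np

# Hypothetical example: estimate wall shear stress from a phase-averaged
# near-wall velocity profile, tau_w = mu * du/dy at the wall.
mu = 1.0e-3                               # dynamic viscosity of water, Pa*s
y = np.array([0.0, 25e-6, 50e-6, 75e-6])  # wall-normal positions, m
u = np.array([0.0, 0.12, 0.22, 0.30])     # phase-averaged velocity, m/s

# One-sided finite difference for the wall-normal velocity gradient at y = 0
dudy_wall = (u[1] - u[0]) / (y[1] - y[0])

wss = mu * dudy_wall                      # wall shear stress, Pa
print(f"Estimated WSS: {wss:.2f} Pa")
```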


Subjects
Rheology; Humans; Dental Pulp Cavity/radiation effects; Therapeutic Irrigation/methods; Therapeutic Irrigation/instrumentation; Lasers; Root Canal Irrigants; Photoacoustic Techniques/methods; Root Canal Preparation/methods; Root Canal Preparation/instrumentation
5.
Med Biol Eng Comput ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656734

ABSTRACT

This paper proposes a medical image fusion method in the non-subsampled shearlet transform (NSST) domain to combine a gray-scale image with the respective pseudo-color image obtained through different imaging modalities. The proposed method applies a novel improved dual-channel pulse-coupled neural network (IDPCNN) model to fuse the high-pass sub-images, whereas the Prewitt operator is combined with maximum regional energy (MRE) to construct the fused low-pass sub-image. First, the gray-scale image and luminance of the pseudo-color image are decomposed using NSST to find the respective sub-images. Second, the low-pass sub-images are fused by the Prewitt operator and MRE-based rule. Third, the proposed IDPCNN is utilized to get the fused high-pass sub-images from the respective high-pass sub-images. Fourth, the luminance of the fused image is obtained by applying inverse NSST on the fused sub-images, which is combined with the chrominance components of the pseudo-color image to construct the fused image. A total of 28 diverse medical image pairs, 11 existing methods, and nine objective metrics are used in the experiment. Qualitative and quantitative fusion results show that the proposed method is competitive with and even outpaces some of the existing medical fusion approaches. It is also shown that the proposed method efficiently combines two gray-scale images.
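To make the low-pass fusion rule concrete, here is a minimal sketch that combines Prewitt edge strength with regional energy and selects, pixel by pixel, the sub-image with the larger activity measure. It is a simplified illustration of the rule described above, not the authors' implementation; `fuse_lowpass` is a hypothetical helper.

```python
import numpy as np
from scipy import ndimage

def fuse_lowpass(lp_a, lp_b, win=3):
    """Illustrative low-pass fusion: Prewitt edge strength plus maximum
    regional energy (MRE), picking per pixel the sub-image whose activity
    measure is larger."""
    def activity(img):
        gx = ndimage.prewitt(img, axis=0)
        gy = ndimage.prewitt(img, axis=1)
        grad = np.hypot(gx, gy)                         # Prewitt edge strength
        energy = ndimage.uniform_filter(img ** 2, win)  # regional energy
        return grad + energy
    mask = activity(lp_a) >= activity(lp_b)
    return np.where(mask, lp_a, lp_b)

# Toy usage with random low-pass sub-images
rng = np.random.default_rng(0)
fused = fuse_lowpass(rng.random((64, 64)), rng.random((64, 64)))
```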

6.
Artif Intell Med ; 151: 102828, 2024 May.
Article in English | MEDLINE | ID: mdl-38564879

ABSTRACT

Reliable large-scale cell detection and segmentation is the fundamental first step to understanding biological processes in the brain. The ability to phenotype cells at scale can accelerate preclinical drug evaluation and system-level brain histology studies. The impressive advances in deep learning offer a practical solution to cell image detection and segmentation. Unfortunately, categorizing cells and delineating their boundaries for training deep networks is an expensive process that requires skilled biologists. This paper presents a novel self-supervised Dual-Loss Adaptive Masked Autoencoder (DAMA) for learning rich features from multiplexed immunofluorescence brain images. DAMA's objective function minimizes the conditional entropy in pixel-level reconstruction and feature-level regression. Unlike existing self-supervised learning methods based on a random image masking strategy, DAMA employs a novel adaptive mask sampling strategy to maximize mutual information and effectively learn brain cell data. To the best of our knowledge, this is the first effort to develop a self-supervised learning method for multiplexed immunofluorescence brain images. Our extensive experiments demonstrate that DAMA features enable superior cell detection, segmentation, and classification performance without requiring many annotations. In addition, to examine the generalizability of DAMA, we also experimented on TissueNet, a multiplexed imaging dataset comprised of two-channel fluorescence images from six distinct tissue types, captured using six different imaging platforms. Our code is publicly available at https://github.com/hula-ai/DAMA.
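For context on the masking strategies discussed above, the sketch below implements plain random patch masking of the kind used by standard masked autoencoders, i.e. the baseline DAMA improves on; DAMA's adaptive, mutual-information-driven sampling is not reproduced here, and the patch size and mask ratio are assumptions.

```python
import torch

def random_patch_mask(img: torch.Tensor, patch: int = 16, ratio: float = 0.75):
    """Zero out a random subset of non-overlapping patches in each image."""
    b, c, h, w = img.shape
    n_patches = (h // patch) * (w // patch)
    n_keep = int(n_patches * (1 - ratio))
    # For each image, keep a random subset of patch indices and mask the rest.
    keep = torch.rand(b, n_patches).argsort(dim=1)[:, :n_keep]
    mask = torch.zeros(b, n_patches)
    mask.scatter_(1, keep, 1.0)
    mask = mask.view(b, 1, h // patch, w // patch)
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return img * mask, mask

masked, mask = random_patch_mask(torch.rand(4, 3, 224, 224))
```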


Subjects
Brain; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Humans; Deep Learning; Animals; Algorithms; Neuroimaging/methods
7.
Phys Med ; 121: 103365, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38663347

ABSTRACT

PURPOSE: To establish size-specific diagnostic reference levels (DRLs) for pulmonary embolism (PE) based on patient CT examinations performed on 74 CT devices, to assess task-based image quality (IQ) for each device, to investigate the variability of dose and IQ across different CTs, and to propose a dose/IQ optimization. METHODS: Dose data from 1051 CT pulmonary angiography examinations were collected. DRLs were calculated as the 75th percentile of the CT dose index (CTDI) for two patient categories defined by thoracic perimeter. IQ was assessed with two thoracic phantom sizes using local acquisition parameters and three other dose levels. The area under the ROC curve (AUC) for a 2 mm low-perfused vessel was assessed with a non-prewhitening-with-eye-filter model observer. The optimal IQ-dose point was derived mathematically from the relationship between IQ and dose. RESULTS: The DRLs of CTDIvol were 6.4 mGy and 10 mGy for the two patient categories. The 75th percentiles of phantom CTDIvol were 6.3 mGy and 10 mGy for the two phantom sizes, with inter-quartile AUC ranges of 0.047 and 0.066, respectively. After optimization, the 75th percentiles of phantom CTDIvol decreased to 5.9 mGy and 7.55 mGy, and the inter-quartile AUC ranges were reduced to 0.025 and 0.057 for the two phantom sizes. CONCLUSION: DRLs for PE were proposed as a function of patient thoracic perimeter. This study highlights the variability in both dose and IQ across devices. An optimization process can be initiated for each device and lead to harmonization of practice across multiple CT sites.
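The DRL calculation itself is a simple percentile computation; the sketch below illustrates it on made-up CTDIvol values for two size categories.

```python
import numpy as np

# Minimal sketch: a diagnostic reference level (DRL) is taken as the 75th
# percentile of the CTDIvol distribution within a patient-size category.
# The dose values below are invented for illustration.
ctdi_small = np.array([4.1, 5.0, 5.8, 6.2, 6.9, 7.4, 5.5, 6.0])   # mGy
ctdi_large = np.array([7.9, 8.8, 9.5, 10.2, 11.0, 9.9, 10.7])     # mGy

drl_small = np.percentile(ctdi_small, 75)
drl_large = np.percentile(ctdi_large, 75)
print(f"DRL (smaller perimeter): {drl_small:.1f} mGy")
print(f"DRL (larger perimeter):  {drl_large:.1f} mGy")
```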

8.
Sci Rep ; 14(1): 9554, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664440

ABSTRACT

While deep learning has become the go-to method for image denoising thanks to its impressive noise-removal capability, excessive network depth plagues many existing approaches and imposes a significant computational burden. To address this bottleneck, we propose a lightweight progressive residual and attention mechanism fusion network that alleviates these limitations and handles both Gaussian and real-world image noise. The network begins with dense blocks (DB) that learn the noise distribution, which substantially reduces the number of parameters while comprehensively extracting local image features. It then adopts a progressive strategy in which shallow convolutional features are incrementally integrated with deeper features, establishing a residual fusion framework that extracts global features relevant to the noise characteristics. Finally, the output feature maps of each DB and the edge features from the convolutional attention feature fusion module (CAFFM) are combined and passed to the reconstruction layer, which produces the denoised image. Experiments with Gaussian white noise and natural noise at noise levels 15-50 show a marked performance gain: the average Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index for Color images (FSIMc) exceed those of more than 20 existing methods across six datasets. The proposed network therefore denoises images effectively while preserving essential features such as edges and textures, marking a notable advance in image processing. The model is applicable to a range of image-centric domains, including image processing, computer vision, video analysis, and pattern recognition.
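The PSNR and SSIM figures of merit used in this kind of comparison can be computed with scikit-image; the sketch below evaluates them on a synthetic image corrupted with Gaussian noise at sigma = 25 (on a 0-255 scale), one of the noise levels in the 15-50 range mentioned above.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic clean image and a Gaussian-noise-corrupted copy (sigma = 25/255).
rng = np.random.default_rng(0)
clean = rng.random((128, 128)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 25 / 255, clean.shape), 0, 1)

psnr = peak_signal_noise_ratio(clean, noisy, data_range=1.0)
ssim = structural_similarity(clean, noisy, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```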

9.
J Eat Disord ; 12(1): 50, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664846

ABSTRACT

BACKGROUND: The Functionality Appreciation Scale is a 7-item measure of an individual's appreciation of his or her body for what it can do and is capable of doing. While this instrument has been increasingly used in intervention-based research, its psychometric properties have not been extensively studied in non-English-speaking populations. The psychometric properties of a novel Spanish translation of the FAS were examined. METHODS: An online sample of 838 Spanish adults (mean age = 31.79 ± 11.95 years, 50.48% men) completed the Spanish FAS and validated measures of body appreciation, eating disorder symptomatology, intuitive eating, and life satisfaction. RESULTS: Exploratory factor analysis supported a 1-dimensional factor structure of the FAS, which was further supported by confirmatory factor analysis (SBχ²(14) = 83.82, SBχ²normed = 1.48, robust RMSEA = 0.094 (90% CI = 0.074, 0.115), SRMR = 0.040, robust CFI = 0.946, robust TLI = 0.924). Invariance across genders was shown, and there were no significant differences according to gender (t(417) = 0.77, p =.444, d = 0.07). Construct validity was also supported through significant associations with the other measures of the study. Incremental validity was established in women. Thus, appreciation of functionality predicted life satisfaction over and above the variance accounted for by other body image and eating disorder-related measures (F(4, 399) = 18.86, p <.001, ΔR2 = 0.03). CONCLUSIONS: These results support the psychometric properties of the Spanish FAS and demonstrate the importance of the appreciation of functionality in relation to a healthier body image and psychological wellbeing.
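The incremental-validity step reported above is a hierarchical regression comparing models with and without the FAS score; the sketch below reproduces that logic (Delta R^2 plus an F-test) on simulated data, with variable names chosen purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated predictors and outcome; coefficients are arbitrary.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "body_app": rng.normal(size=n),
    "ed_symptoms": rng.normal(size=n),
    "fas": rng.normal(size=n),
})
df["life_sat"] = (0.4 * df["body_app"] - 0.3 * df["ed_symptoms"]
                  + 0.2 * df["fas"] + rng.normal(size=n))

# Reduced model without FAS vs full model with FAS.
reduced = sm.OLS(df["life_sat"],
                 sm.add_constant(df[["body_app", "ed_symptoms"]])).fit()
full = sm.OLS(df["life_sat"],
              sm.add_constant(df[["body_app", "ed_symptoms", "fas"]])).fit()

delta_r2 = full.rsquared - reduced.rsquared
f_stat, p_value, df_diff = full.compare_f_test(reduced)
print(f"Delta R^2 = {delta_r2:.3f}, F = {f_stat:.2f}, p = {p_value:.4f}")
```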

10.
Exp Dermatol ; 33(4): e15082, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38664884

ABSTRACT

As a chronic relapsing disease, psoriasis is characterized by widespread skin lesions. The Psoriasis Area and Severity Index (PASI) is the most frequently used tool for evaluating psoriasis severity in clinical practice. Nevertheless, long-term monitoring and precise evaluation are difficult for dermatologists and patients: scoring is time-consuming, subjective, and prone to evaluation bias. To develop a deep learning system with high accuracy and speed to assist PASI evaluation, we collected 2657 high-quality images from 1486 psoriasis patients, and the images were segmented and annotated. We then used the YOLO-v4 algorithm to build the model via four modules and conducted a human-computer comparison using quadratic weighted kappa (QWK) coefficients and intra-class correlation coefficients (ICC). YOLO-v4 was selected for model training and optimization after comparison with YOLOv3, RetinaNet, EfficientDet, and Faster R-CNN. The mean average precision (mAP) for the lesion features was as follows: erythema, mAP = 0.903; scale, mAP = 0.908; and induration, mAP = 0.882. The human-computer comparison showed moderate consistency for skin lesion severity and excellent consistency for area and PASI score. Finally, an intelligent PASI app was developed for remote disease assessment and course management, with good agreement with dermatologists. Taken together, we propose an intelligent PASI app based on the YOLO-v4 image algorithm that can assist dermatologists in long-term, objective PASI scoring, pointing toward similar clinical assessments that computers can assist in a time-saving and objective manner.
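The human-computer agreement metric used above, quadratic weighted kappa, is available in scikit-learn; the sketch below computes it on invented dermatologist and model severity grades.

```python
from sklearn.metrics import cohen_kappa_score

# Invented PASI component severity grades (0-4) for illustration only.
dermatologist = [2, 3, 1, 4, 0, 2, 3, 1, 2, 4]
model_grades  = [2, 3, 2, 4, 0, 1, 3, 1, 2, 3]

qwk = cohen_kappa_score(dermatologist, model_grades, weights="quadratic")
print(f"QWK = {qwk:.2f}")
```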


Subjects
Algorithms; Deep Learning; Psoriasis; Severity of Illness Index; Psoriasis/pathology; Humans; Image Processing, Computer-Assisted/methods
11.
Photoacoustics ; 38: 100607, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38665365

ABSTRACT

Ring-array photoacoustic tomography (PAT) systems have been widely used in noninvasive biomedical imaging. However, the reconstructed image usually suffers from spatially rotational blur and streak artifacts due to non-ideal imaging conditions. To improve reconstruction quality, we introduce the concept of spatially rotational convolution to model the image blur process, build a regularized restoration model accordingly, and design an alternating minimization algorithm, termed blind spatially rotational deconvolution, to obtain the restored image. We also present an image preprocessing method based on the proposed algorithm to remove the streak artifacts. Experiments on phantoms and in vivo biological tissues show that our approach significantly enhances the resolution of images obtained from a ring-array PAT system and removes streak artifacts effectively.

12.
Data Brief ; 54: 110379, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38623554

ABSTRACT

Detecting emergency aircraft landing sites is crucial for ensuring passenger and crew safety during unexpected forced landings caused by factors such as engine malfunctions, adverse weather, or other aviation emergencies. In this article, we present a dataset of Google Maps images with corresponding masks, manually annotated for emergency aircraft landing sites and distinguishing safe areas, which offer suitable conditions for emergency landings, from unsafe areas, which present hazardous conditions. Drawing on detailed guidelines from the Federal Aviation Administration, the annotations focus on key features such as slope, surface type, and obstacle presence, with the goal of pinpointing appropriate landing areas. The dataset contains 4180 images, with 2090 raw images accompanied by their corresponding annotation instances. It employs a semantic segmentation approach, categorizing image pixels into two classes, "Safe" and "Unsafe", based on authenticated terrain-specific attributes, thereby offering a nuanced understanding of the viability of various landing sites in emergency scenarios.
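A dataset of this kind would typically be consumed through an image/mask loader for binary semantic segmentation; the sketch below shows one possible PyTorch `Dataset`, with a hypothetical directory layout and pixel encoding rather than the published dataset's actual structure.

```python
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class LandingSiteDataset(Dataset):
    """Pairs of RGB satellite tiles and binary Safe/Unsafe masks.
    Directory layout (root/images, root/masks) is a hypothetical example."""
    def __init__(self, root: str):
        self.images = sorted(Path(root, "images").glob("*.png"))
        self.masks = sorted(Path(root, "masks").glob("*.png"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = np.asarray(Image.open(self.images[idx]).convert("RGB"))
        mask = np.asarray(Image.open(self.masks[idx]).convert("L"))
        # Assumed encoding: 0 = Unsafe, any nonzero value = Safe.
        return img, (mask > 0).astype(np.int64)
```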

13.
Sci Rep ; 14(1): 9031, 2024 04 19.
Article in English | MEDLINE | ID: mdl-38641688

ABSTRACT

Microscopy is integral to medical research, facilitating the exploration of many biological questions, notably cell quantification. However, this process is time-consuming and error-prone, whether performed by human observers or by automated methods that are usually applied to fluorescence images. In response, machine learning algorithms have been integrated into microscopy, automating tasks and constructing predictive models from vast datasets that learn representations for object detection, image segmentation, and target classification. An advantageous strategy is to use unstained images, which preserves cell integrity and enables morphology-based classification, something that is hindered when fluorescent markers are used. The aim of this work is to introduce a model capable of classifying distinct cell lineages in digital contrast microscopy images and to create a predictive model that identifies the lineage and supports accurate quantification of cell numbers. Using a CNN, a classification model predicting cellular lineage achieved an accuracy of 93%, with ROC curve results nearing 1.0, indicating robust performance. Some lineages, namely SH-SY5Y (78%), HUH7_mayv (85%), and A549 (88%), exhibited slightly lower accuracies. These outcomes underscore the quality of the model and emphasize the potential of CNNs to address the inherent complexities of microscopic images.
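As a rough illustration of the kind of classifier described above, the sketch below defines a small CNN in PyTorch for label-free lineage classification; the architecture, single-channel 64x64 input, and five-class output are assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class LineageCNN(nn.Module):
    """Minimal CNN for classifying cell lineages from contrast-microscopy crops."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LineageCNN()
logits = model(torch.randn(8, 1, 64, 64))  # batch of 8 toy images
print(logits.shape)                        # torch.Size([8, 5])
```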


Subjects
Microscopy; Neuroblastoma; Humans; Neural Networks, Computer; Algorithms; Machine Learning
14.
BMC Cardiovasc Disord ; 24(1): 217, 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38643100

ABSTRACT

BACKGROUND: During normal sinus rhythm, atrial depolarization is conducted from the right atrium to the left atrium through Bachmann's bundle, and the normal P-wave axis, measured in the frontal plane, lies between 0° and +75°. A change in P-wave polarity helps in analyzing the site of origin. CASE PRESENTATION: We report a patient with a negative P wave in lead I. The characteristics of the QRS complex in leads V1 to V6 are helpful for the preliminary differential diagnosis. With correct limb-lead (right arm-left arm) placement, the 12-lead electrocardiogram (ECG) shows sinus rhythm with complete right bundle branch block (RBBB). CONCLUSIONS: A change in P-wave polarity, together with the characteristics of the QRS complex, can help identify limb-lead reversals.


Subjects
Bundle-Branch Block; Electrocardiography; Humans; Bundle-Branch Block/diagnosis; Sinoatrial Node; Heart Atria; Atrioventricular Node
15.
Front Comput Neurosci ; 18: 1391025, 2024.
Article in English | MEDLINE | ID: mdl-38634017

ABSTRACT

According to experts in neurology, brain tumours pose a serious risk to human health, and their clinical identification and treatment rely heavily on accurate segmentation. The varied sizes, shapes, and locations of brain tumours make accurate automated segmentation a formidable challenge in neuroscience. U-Net, with its computational intelligence and concise design, has lately been the go-to model for medical image segmentation, but restricted local receptive fields, loss of spatial information, and inadequate contextual information still limit such models. A convolutional neural network (CNN) and a Mel-spectrogram form the basis of the accompanying cough-recognition technique: audio recorded in a variety of intricate settings is combined and augmented, preprocessed to a consistent length, and converted into a Mel-spectrogram. To address the segmentation issues, a novel model for brain tumor segmentation (BTS), Intelligence Cascade U-Net (ICU-Net), is proposed. It is built on dynamic convolution and uses a non-local attention mechanism. The principal design is a two-stage cascade of 3D U-Nets intended to reconstruct more detailed spatial information about brain tumours. The objective is to identify the learnable parameters that maximize the likelihood of the data. To improve the network's ability to capture long-distance dependencies, Expectation-Maximization is applied to the cascade network's lateral connections, enabling it to leverage contextual information more effectively. Lastly, to enhance the network's ability to capture local characteristics, dynamic convolutions with local adaptive capabilities replace the cascade network's standard convolutions. We compared our results with those of other typical methods in extensive experiments on the publicly available BraTS 2019/2020 datasets. The Dice scores for tumor core (TC), whole tumor, and enhancing tumor on the BraTS 2019/2020 validation sets are 0.897/0.903, 0.826/0.828, and 0.781/0.786, respectively, indicating strong performance on BTS tasks.
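The Dice scores reported above are computed from the overlap between predicted and reference masks; a minimal implementation is sketched below on toy binary masks.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks, as used for the BraTS
    tumor-core / whole-tumor / enhancing-tumor results quoted above."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy usage on random binary masks
rng = np.random.default_rng(0)
print(dice_score(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5))
```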

16.
JMIR Med Inform ; 12: e55627, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38592758

ABSTRACT

BACKGROUND: In the evolving field of health care, multimodal generative artificial intelligence (AI) systems, such as ChatGPT-4 with vision (ChatGPT-4V), represent a significant advancement, as they integrate visual data with text data. This integration has the potential to revolutionize clinical diagnostics by offering more comprehensive analysis capabilities. However, the impact on diagnostic accuracy of using image data to augment ChatGPT-4 remains unclear. OBJECTIVE: This study aims to assess the impact of adding image data on ChatGPT-4's diagnostic accuracy and provide insights into how image data integration can enhance the accuracy of multimodal AI in medical diagnostics. Specifically, this study endeavored to compare the diagnostic accuracy between ChatGPT-4V, which processed both text and image data, and its counterpart, ChatGPT-4, which only uses text data. METHODS: We identified a total of 557 case reports published in the American Journal of Case Reports from January 2022 to March 2023. After excluding cases that were nondiagnostic, pediatric, and lacking image data, we included 363 case descriptions with their final diagnoses and associated images. We compared the diagnostic accuracy of ChatGPT-4V and ChatGPT-4 without vision based on their ability to include the final diagnoses within differential diagnosis lists. Two independent physicians evaluated their accuracy, with a third resolving any discrepancies, ensuring a rigorous and objective analysis. RESULTS: The integration of image data into ChatGPT-4V did not significantly enhance diagnostic accuracy, showing that final diagnoses were included in the top 10 differential diagnosis lists at a rate of 85.1% (n=309), comparable to the rate of 87.9% (n=319) for the text-only version (P=.33). Notably, ChatGPT-4V's performance in correctly identifying the top diagnosis was inferior, at 44.4% (n=161), compared with 55.9% (n=203) for the text-only version (P=.002, χ2 test). Additionally, ChatGPT-4's self-reports showed that image data accounted for 30% of the weight in developing the differential diagnosis lists in more than half of cases. CONCLUSIONS: Our findings reveal that currently, ChatGPT-4V predominantly relies on textual data, limiting its ability to fully use the diagnostic potential of visual information. This study underscores the need for further development of multimodal generative AI systems to effectively integrate and use clinical image data. Enhancing the diagnostic performance of such AI systems through improved multimodal data integration could significantly benefit patient care by providing more accurate and comprehensive diagnostic insights. Future research should focus on overcoming these limitations, paving the way for the practical application of advanced AI in medicine.
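The chi-square comparison of top-diagnosis accuracy can be reproduced directly from the counts reported above (161/363 for ChatGPT-4V versus 203/363 for text-only ChatGPT-4); the sketch below builds the 2x2 contingency table and runs the test with SciPy.

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table of correct vs incorrect top diagnoses per model.
table = [
    [161, 363 - 161],   # ChatGPT-4V: correct, incorrect
    [203, 363 - 203],   # ChatGPT-4 (text only): correct, incorrect
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```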

17.
Cell Rep Med ; 5(4): 101506, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38593808

ABSTRACT

Prostate cancer (PCa) is a common malignancy in males. The pathology review of PCa is crucial for clinical decision-making, but traditional pathology review is labor intensive and subjective to some extent. Digital pathology and whole-slide imaging enable the application of artificial intelligence (AI) in pathology. This review highlights the success of AI in detecting and grading PCa, predicting patient outcomes, and identifying molecular subtypes. We propose that AI-based methods could collaborate with pathologists to reduce workload and assist clinicians in formulating treatment recommendations. We also introduce the general process and challenges in developing AI pathology models for PCa. Importantly, we summarize publicly available datasets and open-source codes to facilitate the utilization of existing data and the comparison of the performance of different models to improve future studies.


Subjects
Artificial Intelligence; Prostatic Neoplasms; Male; Humans; Clinical Decision-Making
18.
Phys Med Biol ; 69(10)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38593821

ABSTRACT

Objective. The textures and detailed structures in computed tomography (CT) images are highly desirable for clinical diagnosis. This study aims to expand the current body of work on textures and details preserving convolutional neural networks for low-dose CT (LDCT) image denoising task. Approach. This study proposed a novel multi-scale feature aggregation and fusion network (MFAF-net) for LDCT image denoising. Specifically, we proposed a multi-scale residual feature aggregation module to characterize multi-scale structural information in CT images, which captures regional-specific inter-scale variations using learned weights. We further proposed a cross-level feature fusion module to integrate cross-level features, which adaptively weights the contributions of features from encoder to decoder by using a spatial pyramid attention mechanism. Moreover, we proposed a self-supervised multi-level perceptual loss module to generate multi-level auxiliary perceptual supervision for recovery of salient textures and structures of tissues and lesions in CT images, which takes advantage of abundant semantic information at various levels. We introduced parameters for the perceptual loss to adaptively weight the contributions of auxiliary features of different levels and we also introduced an automatic parameter tuning strategy for these parameters. Main results. Extensive experimental studies were performed to validate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method can achieve better performance on both fine textures preservation and noise suppression for CT image denoising task compared with other competitive convolutional neural network (CNN) based methods. Significance. The proposed MFAF-net takes advantage of multi-scale receptive fields, cross-level features integration and self-supervised multi-level perceptual loss, enabling more effective recovering of fine textures and detailed structures of tissues and lesions in CT images.
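In the spirit of the self-supervised multi-level perceptual loss described above, the sketch below computes a VGG16-based perceptual loss at several feature levels with one learnable weight per level; the backbone, layer choices, and weighting scheme are illustrative assumptions, not the MFAF-net implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class MultiLevelPerceptualLoss(nn.Module):
    """Weighted sum of feature-space MSE at several VGG16 depths."""
    def __init__(self, layer_ids=(3, 8, 15)):  # relu1_2, relu2_2, relu3_3
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in features.parameters():
            p.requires_grad_(False)
        self.blocks = nn.ModuleList()
        prev = 0
        for lid in layer_ids:
            self.blocks.append(features[prev:lid + 1])
            prev = lid + 1
        # One learnable log-variance weight per level.
        self.log_w = nn.Parameter(torch.zeros(len(layer_ids)))

    def forward(self, pred, target):
        # Expand single-channel CT images to 3 channels for VGG.
        pred, target = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        loss = 0.0
        for i, block in enumerate(self.blocks):
            pred, target = block(pred), block(target)
            level_loss = nn.functional.mse_loss(pred, target)
            loss = loss + torch.exp(-self.log_w[i]) * level_loss + self.log_w[i]
        return loss

criterion = MultiLevelPerceptualLoss()
loss = criterion(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```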


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Humans; Neural Networks, Computer; Radiation Dosage; Signal-To-Noise Ratio
19.
Surg Innov ; 31(3): 291-306, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38619039

ABSTRACT

OBJECTIVE: To propose a transfer learning-based method for tumor segmentation in intraoperative fluorescence images that will assist surgeons in efficiently and accurately identifying the boundaries of tumors of interest. METHODS: We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance segmentation performance on fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared against DCNNs trained end-to-end and against the traditional level-set method. RESULTS: The transfer learning-based UNet++ model achieved segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained DeepLab v3+ network performed exceptionally well, with a segmentation accuracy of 96.48%. Furthermore, all models achieved segmentation accuracies of over 90% on the DTHP dataset. CONCLUSION: To the best of our knowledge, this study is the first to explore tumor segmentation in intraoperative fluorescence images. The results show that, compared with traditional methods, deep learning offers significant advantages in segmentation performance, and transfer learning enables deep learning models to perform better on small-sample fluorescence image data than end-to-end training. These findings provide strong support for surgeons to obtain more reliable and accurate image segmentation results during surgery.
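A minimal sketch of the transfer-learning setup described above: load a segmentation network with an ImageNet-pretrained backbone, adapt the head to a binary tumor/background output, optionally freeze early layers, and fine-tune. The model choice and freezing policy are assumptions, and loading of the ABFM/DTHP data is omitted.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Segmentation model with an ImageNet-pretrained ResNet-50 backbone and a
# freshly initialized 2-class head (tumor vs background).
model = deeplabv3_resnet50(weights_backbone="IMAGENET1K_V1", num_classes=2)

# Optionally freeze the earliest backbone layers so only higher-level
# features adapt to the fluorescence domain.
for name, param in model.backbone.named_parameters():
    if name.startswith(("conv1", "layer1")):
        param.requires_grad_(False)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Toy forward pass; real training would iterate over fluorescence image batches.
out = model(torch.rand(2, 3, 256, 256))["out"]  # logits: (2, 2, 256, 256)
```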


Subjects
Neural Networks, Computer; Optical Imaging; Humans; Optical Imaging/methods; Neoplasms/surgery; Neoplasms/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Surgery, Computer-Assisted/methods
20.
Zebrafish ; 21(2): 73-79, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38621202

ABSTRACT

The goal of the University of Wisconsin-Milwaukee WInSTEP SEPA program is to provide valuable and relevant research experiences to students and instructors in diverse secondary educational settings. Introducing an online experience allows the expansion of a proven instructional research program to a national scale and removes many common barriers. These can include lack of access to zebrafish embryos, laboratory equipment, and modern classroom facilities, which often deny disadvantaged and underrepresented students from urban and rural school districts valuable inquiry-based learning opportunities. An online repository of zebrafish embryo imagery was developed in the Carvan laboratory to assess the effects of environmental chemicals. The WInSTEP SEPA program expanded its use as an accessible online tool, complementing the existing classroom experience of our zebrafish module. This virtual laboratory environment contains images of zebrafish embryos grown in the presence of environmental toxicants (ethanol, caffeine, and nicotine), allowing students to collect data on 19 anatomical endpoints and generate significant amounts of data related to developmental toxicology and environmental health. This virtual laboratory offers students and instructors the choice of data sets that differ in the independent variables of chemical concentration and duration of postfertilization exposure. This enables students considerable flexibility in establishing their own experimental design to match the curriculum needs of each instructor.


Subjects
Students; Zebrafish; Animals; Humans; Environmental Health/education; Learning; Laboratories; Curriculum